Crossmodal action: modality matters.

Authors

  • Lynn Huestegge
  • Eliot Hazeltine
Abstract

Research on multitasking harks back to the beginnings of cognitive psychology. The central question has always been how we manage to perform multiple actions at the same time. Here, we highlight the role of specific input- and output-modalities involved in coordinating multiple action demands (i.e., crossmodal action). For a long time, modality- and content-blind models of multitasking have dominated theory, but a variety of recent findings indicate that modalities and content substantially determine performance. Typically, the term "input modality" refers to sensory channels (e.g., visual input is treated differently from auditory input), and the term "output modality" is closely associated with effector systems (e.g., hand vs. foot movements). However, this definition may be too narrow. The term "input modality" sometimes refers to a dimension within a sensory channel (e.g., shape/color in vision). Furthermore, the linkage between output-modalities and effector systems may not be specific enough to illuminate some notorious twilight zones (e.g., to distinguish between hand and wrist movements). As a consequence, we will use "modality" as an umbrella term here to capture various sources of stimulus variability used to differentiate the task-relevant information and sources of motor variability used to differentiate responses.

Many of the pioneering studies involved the observation of dual-task performance in two continuous tasks that typically consisted of complex action sequences (e.g., reading and writing; see Solomons & Stein, 1896; Spelke, Hirst, & Neisser, 1976). However, it soon became apparent that tighter experimental control was necessary to pinpoint the specific cognitive mechanisms supporting multitasking.

The PRP paradigm: an experimental breakthrough.

The development of the psychological refractory period (PRP) paradigm (Telford, 1931; Welford, 1952) provided a methodological breakthrough that allowed researchers to precisely control the flow of information in both tasks. The PRP paradigm involves two elementary tasks with a limited set of clearly defined stimuli and responses. The mechanisms underlying multitasking are studied by systematically manipulating the temporal overlap of the two tasks, which is achieved by varying the delay between the presentations of the stimuli for the two tasks (stimulus onset asynchrony, SOA). The PRP effect refers to the typical finding that reaction times (RTs) for the second task increase with decreasing SOA, an effect that has been replicated in numerous studies with a variety of stimulus and response modalities (see Bertelson, 1966; Pashler, 1994; Smith, 1967).

The RSB model: a powerful explanatory concept?

The most influential and elegant account of the PRP effect has been the response selection bottleneck (RSB) model (Telford, 1931; Welford, 1952). A starting assumption of the RSB model is that the tasks at hand can be divided into three successive cognitive processing steps, namely perceptual processing (i.e., stimulus encoding/categorization), response selection (i.e., deciding which response corresponds to the stimulus according to the task rules), and response execution. In a number of experiments, the duration of each of these processing stages was systematically manipulated for each of the two tasks (see Pashler, 1994). As a result, the most convincing hypothesis to accommodate the corresponding findings was the assumption that perceptual processing and response execution can operate in parallel across tasks, whereas response selection constitutes a central bottleneck that can handle only one task at a time.

L. Huestegge, RWTH Aachen University, Aachen, Germany. E-mail: [email protected]
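To make the bottleneck logic concrete, the RSB model's prediction for second-task RTs can be sketched in a few lines of code. The sketch below is a minimal illustration, not material from the article: the stage durations are hypothetical placeholders and the helper name rsb_rt2 is invented for this example.

# Illustrative sketch (not from the article): how a response selection bottleneck (RSB)
# produces the PRP effect. All stage durations (in ms) are hypothetical placeholders.

def rsb_rt2(soa, p1=100, s1=150, p2=100, s2=150, e2=50):
    """Predicted Task-2 RT (measured from Task-2 stimulus onset) when Task-2
    response selection must wait until Task-1 response selection has finished.
    Task-1 response execution is omitted: execution does not occupy the bottleneck."""
    bottleneck_free = p1 + s1                           # Task 1 releases the bottleneck at this time
    selection2_start = max(soa + p2, bottleneck_free)   # wait for own percept AND a free bottleneck
    return selection2_start + s2 + e2 - soa             # convert back to time since Task-2 stimulus onset

if __name__ == "__main__":
    for soa in (0, 100, 200, 400, 800):
        print(f"SOA = {soa:3d} ms -> predicted RT2 = {rsb_rt2(soa):.0f} ms")
    # RT2 rises as SOA shrinks (slope of about -1 at short SOAs) and flattens at
    # long SOAs -- the signature PRP effect.

With these placeholder durations, the predicted RT2 falls from 450 ms at SOA = 0 to an asymptote of 300 ms at long SOAs, mirroring the SOA-dependent slowing of the second task described above.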

Related articles

Visual adaptation enhances action sound discrimination

Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perc...

Influences of intra- and crossmodal grouping on visual and tactile Ternus apparent motion.

Previous studies of dynamic crossmodal integration have revealed that the direction of apparent motion in a target modality can be influenced by a spatially incongruent motion stream in another, distractor modality. Yet, it remains to be examined whether non-motion intra- and crossmodal perceptual grouping can affect apparent motion in a given target modality. To address this question, we emplo...

Crossmodal links in spatial attention between vision, audition, and touch: evidence from event-related brain potentials.

Results from event-related potential (ERP) studies are reviewed that investigated crossmodal links in spatial attention between vision, audition and touch to find out which stages in the processing of sensory stimuli are affected by such crossmodal links. ERPs were recorded in response to visual, auditory, and tactile stimuli under conditions where attention was directed to a specific location ...

Spatial attention and crossmodal interactions between vision and touch.

In the present paper, we review several functional imaging studies investigating crossmodal interactions between vision and touch relating to spatial attention. We asked how the spatial unity of a multimodal event in the external world might be represented in the brain, where signals from different modalities are initially processed in distinct brain regions. The results highlight several links...

UNIVERSITE PARIS-SUD, ÉCOLE DOCTORALE : Ecole Doctorale Informatique

Real-time simulation of complex audio-visual scenes remains challenging due to the technically independent but perceptually related rendering process in each modality. Because of the potential crossmodal dependency of auditory and visual perception, the optimization of graphics and sound rendering, such as Level of Details (LOD), should be considered in a combined manner but not as separate iss...

Crossmodal integration for perception and action.

The integration of information from different sensory modalities has many advantages for human observers, including increase of salience, resolution of perceptual ambiguities, and unified perception of objects and surroundings. Several behavioral, electrophysiological and neuroimaging data collected in various tasks, including localization and detection of spatial events, crossmodal perception ...

Journal:
  • Psychological Research

Volume 75, Issue 6

Pages: -

Publication date: 2011